
🌱 Call patchHelper only if necessary when reconciling external refs #11666

Merged

Conversation

@sbueringer (Member) commented Jan 10, 2025

Signed-off-by: Stefan Büringer [email protected]

What this PR does / why we need it:
Whenever we reconcile external references, we try to set either a controller ref or an owner ref on these external objects. After the ref has been added, subsequent reconciles are no-ops.

Before this PR, we always used the patchHelper to figure out whether there is a delta and whether the object has to be patched. The delta calculation required a lot of memory allocations.

With this PR, we avoid using the patchHelper in these cases by simply checking whether the refs are already set (in some cases we also have to check for a label).
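A minimal sketch of the fast-path pattern (the function name and wiring are illustrative assumptions, not the PR's actual diff; the `util` and `patch` helpers are real Cluster API packages):

```go
package external

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"sigs.k8s.io/controller-runtime/pkg/client"

	"sigs.k8s.io/cluster-api/util"
	"sigs.k8s.io/cluster-api/util/patch"
)

// ensureOwnerRef is a hypothetical helper illustrating the idea: return
// early when the desired ownerRef is already present, so the
// allocation-heavy patchHelper delta computation only runs when a patch
// is actually needed.
func ensureOwnerRef(ctx context.Context, c client.Client, obj *unstructured.Unstructured, desired metav1.OwnerReference) error {
	// Fast path: after the first reconcile the ref is usually already set.
	if util.HasOwnerRef(obj.GetOwnerReferences(), desired) {
		return nil
	}

	// Slow path: compute the delta and send a patch only when something changed.
	patchHelper, err := patch.NewHelper(obj, c)
	if err != nil {
		return err
	}
	obj.SetOwnerReferences(util.EnsureOwnerRef(obj.GetOwnerReferences(), desired))
	return patchHelper.Patch(ctx, obj)
}
```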

Which issue(s) this PR fixes (optional, in fixes #<issue number>(, fixes #<issue_number>, ...) format, will close the issue(s) when PR gets merged):
Fixes #

@k8s-ci-robot added the cncf-cla: yes label (indicates the PR's author has signed the CNCF CLA) on Jan 10, 2025
@sbueringer (Member Author)

/cherry-pick release-1.9

@k8s-ci-robot added the do-not-merge/needs-area (PR is missing an area label) and size/L (denotes a PR that changes 100-499 lines, ignoring generated files) labels on Jan 10, 2025
@k8s-infra-cherrypick-robot

@sbueringer: once the present PR merges, I will cherry-pick it on top of release-1.9 in a new PR and assign it to you.

In response to this:

/cherry-pick release-1.9

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@sbueringer added the area/util label (issues or PRs related to utils) on Jan 10, 2025
@k8s-ci-robot removed the do-not-merge/needs-area label on Jan 10, 2025

@sbueringer (Member Author)

/test pull-cluster-api-e2e-main

@fabriziopandini (Member)

lgtm pending linter fix.
Also, as a follow-up, should we apply a similar fix to the following call sites (a sketch of one possible change follows the list)?

```go
patchHelper, err := patch.NewHelper(m, r.Client)
if err != nil {
	return err
}
// Set all other in-place mutable fields that impact the ability to tear down existing machines.
m.Spec.NodeDrainTimeout = controlPlane.KCP.Spec.MachineTemplate.NodeDrainTimeout
m.Spec.NodeDeletionTimeout = controlPlane.KCP.Spec.MachineTemplate.NodeDeletionTimeout
m.Spec.NodeVolumeDetachTimeout = controlPlane.KCP.Spec.MachineTemplate.NodeVolumeDetachTimeout
if err := patchHelper.Patch(ctx, m); err != nil {
	return err
}
controlPlane.Machines[machineName] = m
patchHelper, err = patch.NewHelper(m, r.Client)
if err != nil {
	return err
}
patchHelpers[machineName] = patchHelper
continue
```

```go
patchHelper, err := patch.NewHelper(kubeadmConfig, r.Client)
if err != nil {
	return errors.Wrapf(err, "failed to reconcile certificate expiry for Machine/%s", m.Name)
}
if annotations == nil {
	annotations = map[string]string{}
}
annotations[clusterv1.MachineCertificatesExpiryDateAnnotation] = expiry
kubeadmConfig.SetAnnotations(annotations)
if err := patchHelper.Patch(ctx, kubeadmConfig); err != nil {
	return errors.Wrapf(err, "failed to reconcile certificate expiry for Machine/%s", m.Name)
}
```

```go
patchHelper, err := patch.NewHelper(m, r.Client)
if err != nil {
	return err
}
if err := controllerutil.SetControllerReference(kcp, m, r.Client.Scheme()); err != nil {
	return err
}
// Note that ValidateOwnerReferences() will reject this patch if another
// OwnerReference exists with controller=true.
if err := patchHelper.Patch(ctx, m); err != nil {
	return err
}
```

```go
patchHelper, err := patch.NewHelper(c.Secret, r.Client)
if err != nil {
	return err
}
controller := metav1.GetControllerOf(c.Secret)
// If the current controller is KCP, ensure the owner reference is up to date.
// Note: This ensures secrets created prior to v1alpha4 are updated to have the correct owner reference apiVersion.
if controller != nil && controller.Kind == kubeadmControlPlaneKind {
	c.Secret.SetOwnerReferences(util.EnsureOwnerRef(c.Secret.GetOwnerReferences(), owner))
}
// If the Type doesn't match the type used for secrets created by core components continue without altering the owner reference further.
// Note: This ensures that control plane related secrets created by KubeadmConfig are eventually owned by KCP.
// TODO: Remove this logic once standalone control plane machines are no longer allowed.
if c.Secret.Type == clusterv1.ClusterSecretType {
	// Remove the current controller if one exists.
	if controller != nil {
		c.Secret.SetOwnerReferences(util.RemoveOwnerRef(c.Secret.GetOwnerReferences(), *controller))
	}
	c.Secret.SetOwnerReferences(util.EnsureOwnerRef(c.Secret.GetOwnerReferences(), owner))
}
if err := patchHelper.Patch(ctx, c.Secret); err != nil {
	return errors.Wrapf(err, "failed to set ownerReference")
}
```

```go
obj := resource.DeepCopy()
patchHelper, err := patch.NewHelper(obj, r.Client)
if err != nil {
	return err
}
newRef := metav1.OwnerReference{
	APIVersion: addonsv1.GroupVersion.String(),
	Kind:       clusterResourceSet.GroupVersionKind().Kind,
	Name:       clusterResourceSet.GetName(),
	UID:        clusterResourceSet.GetUID(),
}
obj.SetOwnerReferences(util.EnsureOwnerRef(obj.GetOwnerReferences(), newRef))
return patchHelper.Patch(ctx, obj)
```

```go
patchHelper, err := patch.NewHelper(m, r.Client)
if err != nil {
	return ctrl.Result{}, err
}
// Set all other in-place mutable fields that impact the ability to tear down existing machines.
m.Spec.ReadinessGates = machineSet.Spec.Template.Spec.ReadinessGates
m.Spec.NodeDrainTimeout = machineSet.Spec.Template.Spec.NodeDrainTimeout
m.Spec.NodeDeletionTimeout = machineSet.Spec.Template.Spec.NodeDeletionTimeout
m.Spec.NodeVolumeDetachTimeout = machineSet.Spec.Template.Spec.NodeVolumeDetachTimeout
// Set machine's up to date condition
if upToDateCondition != nil {
	v1beta2conditions.Set(m, *upToDateCondition)
}
if err := patchHelper.Patch(ctx, m, patch.WithOwnedV1Beta2Conditions{Conditions: []string{clusterv1.MachineUpToDateV1Beta2Condition}}); err != nil {
	return ctrl.Result{}, err
}
continue
```

```go
patchHelper, err := patch.NewHelper(m, r.Client)
if err != nil {
	return ctrl.Result{}, err
}
v1beta2conditions.Set(m, *upToDateCondition)
if err := patchHelper.Patch(ctx, m, patch.WithOwnedV1Beta2Conditions{Conditions: []string{clusterv1.MachineUpToDateV1Beta2Condition}}); err != nil {
	return ctrl.Result{}, err
}
```

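A hypothetical sketch of what such a "similar fix" could look like for the certificate-expiry snippet above (identifiers are taken from that snippet; the early-return placement is an assumption, not code from this PR):

```go
// Fast path (hypothetical): skip the patchHelper entirely when the
// annotation already carries the desired expiry value.
annotations := kubeadmConfig.GetAnnotations()
if annotations[clusterv1.MachineCertificatesExpiryDateAnnotation] == expiry {
	return nil // already up to date; no delta computation, no patch
}
patchHelper, err := patch.NewHelper(kubeadmConfig, r.Client)
if err != nil {
	return errors.Wrapf(err, "failed to reconcile certificate expiry for Machine/%s", m.Name)
}
if annotations == nil {
	annotations = map[string]string{}
}
annotations[clusterv1.MachineCertificatesExpiryDateAnnotation] = expiry
kubeadmConfig.SetAnnotations(annotations)
if err := patchHelper.Patch(ctx, kubeadmConfig); err != nil {
	return errors.Wrapf(err, "failed to reconcile certificate expiry for Machine/%s", m.Name)
}
```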
@sbueringer (Member Author)

@fabriziopandini I don't know if it's worth the additional complexity. I think these calls didn't show up in a relevant way in the memory profiles.

@sbueringer force-pushed the pr-patch-reconcile-external branch from 8546845 to 196833d on January 13, 2025 at 10:09
```diff
@@ -22,7 +22,6 @@ linters:
     - errchkjson # invalid types passed to json encoder
     - gci # ensures imports are organized
     - ginkgolinter # ginkgo and gomega
-    - goconst # strings that can be replaced by constants
```
@sbueringer (Member Author)

I've never seen this linter report anything useful.
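For context, goconst reports string literals that recur often enough (three times by default) to warrant a named constant; a hypothetical finding would look like this:

```go
package main

import "fmt"

func main() {
	// goconst would flag "control-plane" here: the same literal occurs
	// three times and could be replaced by a named constant, e.g.
	// const controlPlane = "control-plane".
	fmt.Println("control-plane")
	fmt.Println("control-plane")
	fmt.Println("control-plane")
}
```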

@fabriziopandini (Member)

/lgtm
/approve

@k8s-ci-robot added the lgtm label ("Looks good to me", indicates that a PR is ready to be merged) on Jan 13, 2025
@k8s-ci-robot (Contributor)

LGTM label has been added.

Git tree hash: 6d43e1b425d7a278cc867fd416d71d6c42de2e5f

@k8s-ci-robot (Contributor)

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: fabriziopandini

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot added the approved label (indicates a PR has been approved by an approver from all required OWNERS files) on Jan 13, 2025
@k8s-ci-robot merged commit d642ad2 into kubernetes-sigs:main on Jan 13, 2025
18 checks passed
@k8s-ci-robot added this to the v1.10 milestone on Jan 13, 2025
@k8s-infra-cherrypick-robot

@sbueringer: new pull request created: #11675

In response to this:

/cherry-pick release-1.9

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
